Learning modular representations from global sparse coding networks
Authors
Abstract
Similar resources
Learning Sparse Representations in Reinforcement Learning with Sparse Coding
A variety of representation learning approaches have been investigated for reinforcement learning; much less attention, however, has been given to investigating the utility of sparse coding. Outside of reinforcement learning, sparse coding representations have been widely used, with non-convex objectives that result in discriminative representations. In this work, we develop a supervised sparse...
Learning Word Representations with Hierarchical Sparse Coding
We propose a new method for learning word representations using hierarchical regularization in sparse coding inspired by the linguistic study of word meanings. We show an efficient learning algorithm based on stochastic proximal methods that is significantly faster than previous approaches, making it possible to perform hierarchical sparse coding on a corpus of billions of word tokens. Experime...
Smooth Sparse Coding via Marginal Regression for Learning Sparse Representations
We propose and analyze a novel framework for learning sparse representations, based on two statistical techniques: kernel smoothing and marginal regression. The proposed approach provides a flexible framework for incorporating feature similarity or temporal information present in data sets, via non-parametric kernel smoothing. We provide generalization bounds for dictionary learning using smoot...
Sparse Coding Neural Gas: Learning of overcomplete data representations
We consider the problem of learning an unknown (overcomplete) basis from data that are generated from unknown and sparse linear combinations. Introducing the Sparse Coding Neural Gas algorithm, we show how to employ a combination of the original Neural Gas algorithm and Oja’s rule in order to learn a simple sparse code that represents each training sample by only one scaled basis vector. We gen...
Class-Specific Sparse Coding for Learning of Object Representations
We present two new methods which extend the traditional sparse coding approach with supervised components. The goal of these extensions is to increase the suitability of the learned features for classification tasks while keeping most of their general representation performance. A special visualization is introduced which makes it possible to show the principal effect of the new methods. Furthermore some ...
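The related works above all build on the same basic sparse coding objective: represent a signal x as a sparse combination of dictionary atoms by minimizing ½‖x − Dz‖² + λ‖z‖₁. As a minimal illustrative sketch (not an implementation of any of the listed algorithms), the code below solves this objective with ISTA (iterative soft-thresholding) on a synthetic dictionary; the dictionary, signal, and parameter values are assumptions for the example.

```python
import numpy as np

def soft_threshold(v, t):
    # Elementwise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(x, D, lam=0.05, n_iter=200):
    """Solve min_z 0.5*||x - D z||^2 + lam*||z||_1 via ISTA.

    x: (m,) signal; D: (m, k) dictionary with unit-norm columns.
    """
    # Step size 1/L, where L is the Lipschitz constant of the gradient
    # of the smooth term (the squared spectral norm of D).
    L = np.linalg.norm(D, 2) ** 2
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - x)          # gradient of the quadratic term
        z = soft_threshold(z - grad / L, lam / L)
    return z

# Synthetic overcomplete dictionary (20-dim signals, 50 atoms).
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms

# Ground-truth code with only two active atoms.
z_true = np.zeros(50)
z_true[[3, 17]] = [1.5, -2.0]
x = D @ z_true

z = sparse_code(x, D)
print("active atoms:", np.count_nonzero(np.abs(z) > 1e-3))
print("reconstruction error:", np.linalg.norm(D @ z - x))
```

Because the dictionary is overcomplete, the l1 penalty is what makes the code well-defined: it drives most coefficients to exactly zero, so the recovered z is supported on a handful of atoms rather than spreading energy across all 50.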
Journal
Journal title: BMC Neuroscience
Year: 2010
ISSN: 1471-2202
DOI: 10.1186/1471-2202-11-s1-p131